35 research outputs found

    Quality and Inequity in Digital Security Education

    Few users receive a formal, authoritative introduction to digital security. Rather, digital security skills are often learned haphazardly, as users filter through an overwhelming quantity of security education from a multitude of sources, hoping they are implementing the right set of behaviors to keep them safe. In this thesis, I use computational, interview, and survey methods to investigate how users learn digital security behaviors, how security education impacts security outcomes, and how inequity in security education can create a digital divide. As a first step toward remedying this divide, I conduct a large-scale measurement of the quality of the digital security education content (i.e., security advice) available to users through one of their most-cited sources of education: the Internet. The results of this evaluation suggest a security education ecosystem in crisis: security experts are unable or unwilling to narrow down which behaviors matter most for users' security, leaving end users -- especially those with the fewest resources -- to attempt to implement the hundreds of security behaviors advised by educational materials.

    The Misinformation Paradox: Older Adults are Cynical about News Media, but Engage with It Anyway

    Misinformation can be spread with the click of a button, yet it can cause irreversible harm and negatively impact news consumers’ ability to discern false information. Some prior work suggests that older adults may engage with (read, share, or believe) misinformation at higher rates than others; however, explanations for this engagement vary. To better understand older adults' engagement with misinformation, we investigate their misinformation experiences through their perceptions of prior media experiences. Analyzing 69 semi-structured interviews with adults ages 59+ from the US, the Netherlands, Bosnia, and Turkey, we find that people with decades of potential exposure to, or experience with, both online and traditional news media have reached a state of media cynicism in which they distrust most, or even all, of the news they receive. Yet, despite this media cynicism, the older adults we study rarely fact-check the media they see and continue to read and share news they distrust. These findings suggest that this paradoxical reaction to media cynicism, in addition to prior explanations such as cognitive issues and digital literacy, may in part explain older adults' engagement with misinformation. Thus, we introduce the misinformation paradox as an additional area of research worth exploring.

    Human Perceptions of Fairness in Algorithmic Decision Making: A Case Study of Criminal Risk Prediction

    As algorithms are increasingly used to make important decisions that affect human lives, ranging from social benefit assignment to predicting risk of criminal recidivism, concerns have been raised about the fairness of algorithmic decision making. Most prior work on algorithmic fairness normatively prescribes how fair decisions ought to be made. In contrast, here we take a descriptive approach, surveying users about how they perceive and reason about fairness in algorithmic decision making. A key contribution of this work is the framework we propose to understand why people perceive certain features as fair or unfair to be used in algorithms. Our framework identifies eight properties of features, such as relevance, volitionality, and reliability, as latent considerations that inform people's moral judgments about the fairness of feature use in decision-making algorithms. We validate our framework through a series of scenario-based surveys with 576 people. We find that, based on a person's assessment of the eight latent properties of a feature in our exemplar scenario, we can predict with high accuracy (> 85%) whether the person will judge the use of that feature as fair. Our findings have important implications. At a high level, we show that people's unfairness concerns are multi-dimensional, and we argue that future studies need to address unfairness concerns beyond discrimination. At a low level, we find considerable disagreement in people's fairness judgments. We identify root causes of the disagreements and note possible pathways to resolve them. Comment: To appear in the Proceedings of the Web Conference (WWW 2018). Code available at https://fate-computing.mpi-sws.org/procedural_fairness
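
    A hedged sketch of the kind of prediction the abstract describes: training a simple classifier to map ratings of the eight latent feature properties to a binary fairness judgment. This is not the paper's released code (see the URL above); the 1-7 rating scale, the labeling rule, and the synthetic data below are all illustrative assumptions.

        # Minimal sketch, NOT the paper's analysis: synthetic property ratings
        # stand in for real survey responses.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n = 576  # survey size reported in the abstract
        # Eight latent properties (e.g., relevance, volitionality, reliability),
        # each rated on an assumed 1-7 scale.
        X = rng.integers(1, 8, size=(n, 8)).astype(float)
        # Hypothetical labeling rule: a feature reads as "fair" when its
        # properties rate highly on average, plus some respondent noise.
        y = (X.mean(axis=1) + rng.normal(0, 0.5, n) > 4).astype(int)

        clf = LogisticRegression(max_iter=1000)
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"cross-validated accuracy: {acc:.2f}")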

    Dimensions of Diversity in Human Perceptions of Algorithmic Fairness

    Algorithms are increasingly involved in making decisions that affect human lives. Prior work has explored how people believe algorithmic decisions should be made, but there is little understanding of which individual factors relate to variance in these beliefs across people. As increasing emphasis is placed on oversight boards and regulatory bodies, it is important to understand the biases that may affect human judgments about the fairness of algorithms. Building on factors found in moral foundations theory and the egocentric fairness literature, we explore how people's perceptions of fairness relate to (i) their demographics (age, race, gender, political view) and (ii) their personal experiences with the algorithmic task being evaluated. Specifically, we study human beliefs about the fairness of using different features in an algorithm designed to assist judges in making decisions about granting bail. Our analysis suggests that political views and certain demographic factors, such as age and gender, exhibit a significant relation to people's beliefs about fairness. Additionally, we find that people's beliefs about the fairness of using demographic features such as age, gender, and race for making bail decisions about others vary egocentrically: that is, they vary depending on the person's own age, gender, and race, respectively. Comment: Presented at the CSCW 2019 workshop on Team and Group Diversity

    Problematic Advertising and its Disparate Exposure on Facebook

    Targeted advertising remains an important part of the free web browsing experience, where advertisers' targeting and personalization algorithms together find the most relevant audience for millions of ads every day. However, the wide use of advertising also enables ads to serve as a vehicle for problematic content, such as scams or clickbait. Recent work exploring people's sentiments toward online ads, and the impacts of these ads on people's online experiences, has found evidence that online ads can indeed be problematic. Further, personalization has the potential to aid the delivery of such ads, even when the advertiser targets with low specificity. In this paper, we study Facebook -- one of the internet's largest ad platforms -- and investigate key gaps in our understanding of problematic online advertising: (a) What categories of ads do people find problematic? (b) Are there disparities in the distribution of problematic ads to viewers? and if so, (c) Who is responsible -- advertisers or advertising platforms? To answer these questions, we empirically measure a diverse sample of user experiences with Facebook ads via a 3-month longitudinal panel. We categorize over 32,000 ads collected from this panel (n=132) and survey participants' sentiments toward their own ads to identify four categories of problematic ads. Statistically modeling the distribution of problematic ads across demographics, we find that older people and minority groups are especially likely to be shown such ads. Further, given that 22% of problematic ads had no specific targeting from advertisers, we infer that ad delivery algorithms (the advertising platforms themselves) played a significant role in the biased distribution of these ads. Comment: Accepted to USENIX Security 2023
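
    To make "statistically modeling the distribution of problematic ads across demographics" concrete, here is a hedged sketch of one standard approach: a Poisson regression of per-participant problematic-ad counts on demographics, with total ads seen as an exposure offset. The variable names and synthetic panel below are assumptions, not the study's data or model.

        # Illustrative sketch only; the synthetic "panel" stands in for the
        # study's real ad observations.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 132  # panel size reported in the abstract
        panel = pd.DataFrame({
            "age": rng.integers(18, 80, n),
            "minority": rng.integers(0, 2, n),      # hypothetical group indicator
            "ads_seen": rng.integers(100, 400, n),  # total ads observed per person
        })
        # Hypothetical outcome: problematic-ad counts rise with age and
        # minority status, mimicking the reported disparity.
        rate = 0.05 * (1 + 0.01 * (panel["age"] - 18) + 0.3 * panel["minority"])
        panel["problematic"] = rng.poisson(rate * panel["ads_seen"])

        # Poisson regression with an exposure offset: positive coefficients
        # indicate a higher per-ad rate of problematic ads for that group.
        model = smf.glm("problematic ~ age + minority", data=panel,
                        family=sm.families.Poisson(),
                        offset=np.log(panel["ads_seen"])).fit()
        print(model.summary())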

    Where is the Digital Divide? A Survey of Security, Privacy, and Socioeconomics

    The behavior of the least-secure user can influence security and privacy outcomes for everyone else. Thus, it is important to understand the factors that influence the security and privacy of a broad variety of people. Prior work has suggested that users with differing socioeconomic status (SES) may behave differently; however, no research has examined how SES, advice sources, and resources relate to the security and privacy incidents users report. To address this question, we analyze a 3,000 respondent, census-representative telephone survey. We find that, contrary to prior assumptions, people with lower educational attainment report equal or fewer incidents as more educated people, and that users’ experiences are significantly correlated with their advice sources, regardless of SES or resources
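
    As an illustration only, the sketch below shows one way such a correlation could be tested: a logistic regression of reported incidents on advice source while holding educational attainment (an SES proxy) constant. The variables and synthetic responses are assumptions, not the survey's actual data or analysis.

        # Hedged sketch; synthetic answers stand in for the telephone survey.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 3000  # respondent count reported in the abstract
        survey = pd.DataFrame({
            "education": rng.integers(0, 5, n),  # hypothetical 0-4 attainment scale
            "advice_source": rng.choice(
                ["family", "media", "workplace", "website"], n),
            "incident": rng.integers(0, 2, n),   # reported a security/privacy incident?
        })

        # Logistic regression: does advice source still predict reported
        # incidents once educational attainment is held constant?
        model = smf.glm("incident ~ C(advice_source) + education",
                        data=survey, family=sm.families.Binomial()).fit()
        print(model.summary())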